1.
J Sch Psychol; 101: 101251, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37951664

ABSTRACT

As a feasible tool for evaluating the effects of school-based interventions, Direct Behavior Ratings (DBR) have received much research attention over the past two decades. Although DBR methodology has shown considerable promise, favorable psychometric characteristics have been demonstrated only for tools measuring a small number of constructs. Likewise, although a variety of DBR methods have been proposed, most extant studies have focused on single-item methods. The present study examined the dependability of four methods of formative behavioral assessment (i.e., single-item and multi-item ratings administered either daily [DBR] or weekly [formative behavior rating measures, or FBRM]) across eight psychological constructs (i.e., interpersonal skills, academic engagement, organizational skills, disruptive behavior, oppositional behavior, interpersonal conflict, anxious-depressed, and social withdrawal). School-based professionals (N = 91; i.e., teachers, paraprofessionals, and intervention specialists) each rated one student on all eight constructs after being assigned to one of the four assessment conditions. Dependability estimates varied substantially across methods and constructs (range = 0.75-0.96), although findings support the use of the broad set of formative assessment tools evaluated.


Subjects
Problem Behavior, Humans, Behavior Rating Scale, Schools, Social Skills, Anxiety
2.
J Sch Health; 2023 Nov 07.
Article in English | MEDLINE | ID: mdl-37933437

ABSTRACT

BACKGROUND: Adoption of the Whole School, Whole Community, Whole Child (WSCC) model has been slowed by a lack of available tools to support implementation. The Wellness School Assessment Tool (WellSAT) WSCC is an online assessment tool that allows schools to evaluate the alignment of their policies with the WSCC model. This study assesses the usability of the WellSAT WSCC. METHODS: Using a convergent mixed methods design, we collected qualitative and quantitative data from 5 school-based participants with roles in policy development and evaluation. Participants explored the platform while engaging in a think-aloud procedure and scored a sample policy using the platform. They also completed the System Usability Scale and responded to open-ended questions about the usability of the platform. RESULTS: Participants rated the WellSAT WSCC as providing an above-average user experience, but the data suggested several areas for improvement, including clearer instructions, enhanced visual design of the platform, and guidance for subsequent policy changes. CONCLUSION: The WellSAT WSCC provides an above-average user experience but can be refined to further improve usability. These improvements would increase the potential for wider use to facilitate integration of the WSCC model into school policy.

3.
Educ Treat Children; 45(3): 245-262, 2022.
Article in English | MEDLINE | ID: mdl-35919259

ABSTRACT

Research conducted to date has highlighted barriers to the initial adoption of universal behavior screening in schools. However, little is known about the experiences of those implementing these procedures, and no studies have examined the experiences of educators at different stages of implementing various tiered systems of supports. Universal screening is foundational to a successful Comprehensive, Integrated, Three-Tiered (Ci3T) model of prevention-an integrated tiered system addressing academics, behavior, and social and emotional well-being. Therefore, the perspectives of Ci3T Leadership Team members at different stages of Ci3T implementation were solicited through an online survey designed to understand (1) current school-based screening practices and (2) individual beliefs regarding those practices. A total of 165 Ci3T Leadership Team members representing five school districts from three geographic regions across the United States, all of whom were participating in an Institute of Education Sciences Network grant examining integrated tiered systems, reported that the screening procedures were generally well understood and feasible to implement. At the same time, results highlighted that continued professional learning may be beneficial in two areas: (1) integrating multiple sources of data (e.g., screening data with other data collected as part of regular school practices) and (2) using those multiple data sources to determine next steps for intervention. We discuss educational implications, limitations, and directions for future inquiry.

4.
J Sch Psychol; 81: 28-46, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32711722

ABSTRACT

Direct Behavior Rating (DBR) is a tool designed for the assessment of behavioral change over time. Unlike methods for summative evaluation, the development of progress monitoring tools requires evaluation of sensitivity to change. The present study aimed to evaluate this psychometric feature of five newly developed DBR Multi-Item Scales (DBR-MIS). Teachers identified students with behaviors interfering with their learning or the learning of others and implemented a Daily Report Card (DRC) intervention in classroom settings for two months. The analyses were performed on 31 AB single-case studies. Change metrics were calculated at the individual level using Tau-U (A vs. B + trend B) and Hedges' g, and at the scale level using mixed-effects meta-analysis, hierarchical linear models (HLMs), and the between-case standardized mean difference (BC-SMD). HLMs were estimated considering both fixed and random effects of the intervention and the linear trend within the intervention phase. The results supported sensitivity to change for three DBR-MIS (i.e., Academic Engagement, Organizational Skills, and Disruptive Behavior), and the relative magnitudes were consistent across metrics. Sensitivity to change of DBR-MIS Interpersonal Skills received moderate support. Conversely, empirical evidence was not provided for sensitivity to change of DBR-MIS Oppositional Behavior. Particular emphasis was placed on trend within the intervention phase, in that responses to behavioral interventions may occur gradually or require consistency over time in order to be observed by raters. Implications for the use of the new DBR-MIS in the context of progress monitoring of social-emotional behaviors are discussed.
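
A minimal sketch of the individual-level change metrics named above, using invented phase A (baseline) and phase B (intervention) ratings. It shows Hedges' g with the small-sample correction and the simple Tau (A vs. B) nonoverlap index only; the Tau-U variant used in the study additionally adjusts for trend within phase B, and the scale-level analyses (mixed-effects meta-analysis, HLM, BC-SMD) are not reproduced here.

```python
# Hypothetical AB single-case data for one student; values are illustrative only.
import numpy as np

def hedges_g(a, b):
    """Standardized mean difference (B - A) with Hedges' small-sample correction."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n_a, n_b = len(a), len(b)
    s_pooled = np.sqrt(((n_a - 1) * a.var(ddof=1) + (n_b - 1) * b.var(ddof=1))
                       / (n_a + n_b - 2))
    d = (b.mean() - a.mean()) / s_pooled
    j = 1 - 3 / (4 * (n_a + n_b - 2) - 1)   # correction factor
    return j * d

def tau_a_vs_b(a, b):
    """Simple Tau (A vs. B): improving minus deteriorating baseline-intervention
    pairs, divided by the number of pairs (no phase-B trend correction)."""
    pos = sum(bi > ai for ai in a for bi in b)
    neg = sum(bi < ai for ai in a for bi in b)
    return (pos - neg) / (len(a) * len(b))

baseline = [2, 3, 2, 3, 2]          # e.g., daily Academic Engagement ratings, phase A
intervention = [4, 4, 5, 5, 6, 5]   # phase B ratings after the DRC intervention starts

print(f"Hedges' g      = {hedges_g(baseline, intervention):.2f}")
print(f"Tau (A vs. B)  = {tau_a_vs_b(baseline, intervention):.2f}")
```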


Subjects
Behavior Rating Scale/standards, Child Behavior/psychology, Schools, Students/psychology, Child, Emotions, Female, Humans, Male, Psychometrics, Social Skills, United States
5.
J Sch Health; 90(4): 264-270, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31984528

ABSTRACT

BACKGROUND: Although recent studies provide information regarding state-level policies and district-level practices for social, emotional, and behavioral screening, the degree to which these policies influence screening practices is unknown. As such, the purpose of this exploratory study was to compare state- and district-level policies and reported practices around school-based social, emotional, and behavioral screening. METHODS: We obtained data for the present study from three sources: (1) a recent systematic review of state department of education websites; (2) a national survey of 1330 US school districts; and (3) a Web search and review of policy manuals published by the 1330 school districts. Comparative analyses were used to identify similarities and differences across state and district policies and practices. RESULTS: Of the 1330 districts searched, 911 had policy manuals available for review; 87 of these policy manuals, representing 10 states, met inclusion criteria and thus were included in analyses. Discrepancies were found across state and district policies and across state social, emotional, and behavioral screening guidance and district practices, although consistencies did exist across district policies within the same state. CONCLUSION: District-level guidance around social, emotional, and behavioral screening appears to be limited. Our findings suggest a disconnect between state- and district-level social, emotional, and behavioral screening guidance and district-reported practices, which signifies the need to identify the main influences on district- and school-level screening practices.


Subjects
Health Policy, Mental Disorders/diagnosis, School Health Services/statistics & numerical data, Schools, Adolescent, Adolescent Behavior, Child, Child Behavior, Female, Humans, Male, Surveys and Questionnaires, United States
6.
Sch Psychol; 35(1): 51-60, 2020 Jan.
Article in English | MEDLINE | ID: mdl-30883160

ABSTRACT

[Correction Notice: An Erratum for this article was reported online in School Psychology on Dec 30 2019 (see record 2019-80953-001). In the fourth paragraph of the "Understanding the Factors That Influence Usage" section and in the "Usage Rating Profile for Supporting Students' Behavioral Needs (URP-NEEDS)" section, the URP-NEEDS was incorrectly reported to have 23 items. This measure consists of 24 items. This item was also missing in the Appendix under the "Understanding" factor: "School personnel understand how goals for social, emotional, and behavioral screening fit with a system of student supports." All versions of this article have been corrected.] Previous research has suggested that multiple factors beyond acceptability alone (e.g., feasibility, external supports) may interact to determine whether consumers will use an intervention or assessment in practice. The Usage Rating Profile for Supporting Students' Behavioral Needs (URP-NEEDS) was developed in order to provide a simultaneous assessment of those factors influencing use of a particular approach to identifying and supporting the social, emotional, and behavioral needs of students. As the measure was intended for use with a range of school-based stakeholders, a first necessary step involved establishing the measurement invariance of the instrument. Participants in the current study included 1,112 district administrators, 431 building administrators, and 1,355 teachers who were asked to identify the approach used within their school district to identify and support the social, emotional, and behavioral needs of students, and then to complete the URP-NEEDS in reference to this identified approach. Results supported the measurement invariance of the URP-NEEDS across stakeholder groups. In addition, measurement invariance was found across self-identified approaches to social, emotional, and behavioral risk identification within the district administrator and teacher groups. (PsycINFO Database Record (c) 2020 APA, all rights reserved).


Subjects
Education Personnel, Psychometrics/standards, Schools, Students, Adolescent, Adult, Child, Humans, Stakeholder Participation, Students/psychology
7.
Res Dev Disabil; 79: 33-52, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29853335

ABSTRACT

Evidence-based practice as a process requires the appraisal of research as a critical step. In the field of developmental disabilities, single-case experimental designs (SCEDs) figure prominently as a means for evaluating the effectiveness of non-reversible instructional interventions. Comparative SCEDs contrast two or more instructional interventions to document their relative effectiveness and efficiency. As such, these designs have great potential to inform evidence-based decision-making. To harness this potential, however, interventionists and authors of systematic reviews need tools to appraise the evidence generated by these designs. Our literature review revealed that existing tools do not adequately address the specific methodological considerations of comparative SCEDs that aim to compare instructional interventions of non-reversible target behaviors. The purpose of this paper is to introduce the Comparative Single-Case Experimental Design Rating System (CSCEDARS, "cedars") as a tool for appraising the internal validity of comparative SCEDs of two or more non-reversible instructional interventions. Pertinent literature will be reviewed to establish the need for this tool and to underpin the rationales for individual rating items. Initial reliability information will be provided as well. Finally, directions for instrument validation will be proposed.


Subjects
Developmental Disabilities/therapy, Education, Special/methods, Research Design/standards, Evidence-Based Practice/standards, Humans, Quality Improvement, Reproducibility of Results, Sample Size
8.
J Sch Psychol; 66: 25-40, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29429493

ABSTRACT

The current study represents the first psychometric evaluation of an American English to German translation of a school-based universal screening measure designed to assess academic and disruptive behavior problems. This initial study examines the factor structure and diagnostic accuracy of the newly translated measure in a large sample of 1009 German schoolchildren attending grades 1-6 in western Germany. Confirmatory factor analysis supported a two-factor model for both male and female students. Configural invariance was supported between male and female samples; however, scalar invariance was not supported, with higher thresholds for ratings of female students. Results of receiver operating characteristic (ROC) curve analyses indicated good to excellent diagnostic accuracy, with areas under the curve ranging from 0.89 to 0.93. Optimal cut-off scores were 10, 5, and 13 for the Academic Productivity/Disorganization, Oppositional/Disruptive, and Total Problems Composite scores of the Integrated System Teacher Rating Form, respectively. This initial study of the newly translated measure supports further investigation of its utility for universal screening in German-speaking schools.
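
A minimal sketch of the kind of ROC analysis described above, run on simulated screening scores (none of these numbers come from the study). The Youden-index rule used here to pick the cut-off is one common convention and is only an assumption, not necessarily the rule the authors applied.

```python
# Simulated screening scores vs. a binary at-risk criterion; data are invented.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(8, 4, 900),     # not-at-risk group
                         rng.normal(18, 5, 100)])   # at-risk group scores tend higher
at_risk = np.concatenate([np.zeros(900, int), np.ones(100, int)])

auc = roc_auc_score(at_risk, scores)
fpr, tpr, thresholds = roc_curve(at_risk, scores)
best = np.argmax(tpr - fpr)          # Youden's J = sensitivity + specificity - 1
print(f"AUC = {auc:.2f}")
print(f"Cut-off ~ {thresholds[best]:.1f} "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```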


Subjects
Child Behavior Disorders/diagnosis, Problem Behavior/psychology, Students/psychology, Child, Child Behavior Disorders/psychology, Female, Germany, Humans, Male, Mass Screening, Psychometrics, School Teachers, Schools
9.
Sch Psychol Q; 32(2): 212-225, 2017 Jun.
Article in English | MEDLINE | ID: mdl-26928387

ABSTRACT

This study examines the classification accuracy and teacher acceptability of a problem-focused screener for academic and disruptive behavior problems, which is directly linked to evidence-based intervention. Participants included 39 classroom teachers from 2 public school districts in the Northeastern United States. Teacher ratings were obtained for 390 students in Grades K-6. Data from the screening instrument demonstrate favorable classification accuracy, and teacher ratings of feasibility and acceptability support the use of the measure for universal screening in elementary school settings. Results indicate the novel measure should facilitate classroom intervention for problem behaviors by identifying at-risk students and informing targets for daily behavior report card interventions.


Subjects
Attention Deficit and Disruptive Behavior Disorders/diagnosis, Child Behavior Disorders/diagnosis, Child Behavior/psychology, Problem Behavior/psychology, Students/psychology, Adolescent, Adolescent Behavior/psychology, Attention Deficit and Disruptive Behavior Disorders/psychology, Child, Child Behavior Disorders/psychology, Child, Preschool, Female, Humans, Male, Mass Screening/methods, School Teachers, Schools
10.
Sch Psychol Q; 32(1): 22-34, 2017 Mar.
Article in English | MEDLINE | ID: mdl-27280360

ABSTRACT

In this study, generalizability theory was used to examine the extent to which (a) time-sampling methodology, (b) the number of simultaneous behavior targets, and (c) individual raters influenced variance in ratings of academic engagement for an elementary-aged student. Ten graduate-student raters, with an average of 7.20 hr of previous training in systematic direct observation and 58.20 hr of previous direct observation experience, scored 6 videos of student behavior using 12 different time-sampling protocols. Five videos were submitted for analysis. Results for observations using momentary time-sampling and whole-interval recording suggested that the majority of variance was attributable to the rating occasion, whereas results for partial-interval recording generally demonstrated large residual components comparable with those seen in prior research. Dependability coefficients were above .80 when averaging across 1 to 2 raters using momentary time-sampling and 2 to 3 raters using whole-interval recording. Ratings derived from partial-interval recording needed to be averaged over 3 to 7 raters to reach dependability coefficients above .80.
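
The three time-sampling rules compared above can be made concrete with a small simulation. This is a hypothetical sketch (interval length, session length, and the true engagement rate are invented) showing how momentary time-sampling, whole-interval, and partial-interval recording score the same underlying behavior stream, and why whole-interval recording tends to underestimate and partial-interval recording to overestimate actual prevalence.

```python
# Simulated second-by-second record of whether a student is engaged (1) or not (0).
import numpy as np

rng = np.random.default_rng(1)
engaged = (rng.random(600) < 0.7).astype(int)   # 10-min session, ~70% true engagement
interval = 15                                    # 15-s observation intervals

def score(stream, interval, rule):
    chunks = stream.reshape(-1, interval)
    if rule == "momentary":        # engaged at the final moment of each interval
        hits = chunks[:, -1] == 1
    elif rule == "whole":          # engaged for the entire interval
        hits = chunks.all(axis=1)
    else:                          # "partial": engaged at any point in the interval
        hits = chunks.any(axis=1)
    return hits.mean() * 100       # percentage of intervals scored as engaged

for rule in ("momentary", "whole", "partial"):
    print(f"{rule:>9}: {score(engaged, interval, rule):5.1f}% of intervals")
print(f"     true: {engaged.mean() * 100:5.1f}% of time engaged")
```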


Subjects
Behavioral Research/methods, Child Behavior, Psychology, Educational/methods, Child, Female, Humans, Male, Video Recording
11.
Sch Psychol Q; 30(3): 431-442, 2015 Sep.
Article in English | MEDLINE | ID: mdl-25730160

ABSTRACT

Direct Behavior Rating-Multi-Item Scales (DBR-MIS) have been developed as formative measures of behavioral assessment for use in school-based problem-solving models. Initial research has examined the dependability of composite scores generated by summing all items comprising the scales. However, it has been argued that DBR-MIS may offer assessment at 2 levels of behavioral specificity (i.e., item level and global composite level). Further, it has been argued that scales can be individualized for each student to improve efficiency without sacrificing technical characteristics. The current study examines the dependability of the 5 items comprising a DBR-MIS designed to measure classroom disruptive behavior. A series of generalizability theory and decision studies was conducted to examine the dependability of each item (calls out, noisy, clowns around, talks to classmates, and out of seat), as well as a 3-item composite that was individualized for each student. Seven graduate students rated the behavior of 9 middle-school students on each item over 3 occasions. Ratings were based on 10-min video clips of students during mathematics instruction. Separate generalizability and decision studies were conducted for each item and for a 3-item composite individualized for each student based on the highest-rated items on the first rating occasion. Findings indicate favorable dependability estimates for 3 of the 5 items and exceptional dependability estimates for the individualized composite.


Subjects
Attention Deficit and Disruptive Behavior Disorders/diagnosis, Behavior Rating Scale/standards, Analysis of Variance, Child, Humans, Mathematics, New England, Psychometrics, School Health Services, Sensitivity and Specificity
12.
Sch Psychol Q; 30(1): 37-49, 2015 Mar.
Article in English | MEDLINE | ID: mdl-24708285

ABSTRACT

Although there is much research to support the effectiveness of classwide interventions aimed at improving student engagement, there is also a great deal of variability in how response to group-level intervention has been measured. The unfortunate consequence of this procedural variability is that it is difficult to determine whether differences in obtained results across studies are attributable to the way in which behavior was measured or to actual intervention effectiveness. The purpose of this study was to comparatively evaluate the most commonly used observational methods for monitoring the effects of classwide interventions in terms of the degree to which obtained data represented actual behavior. The 5 most common sampling methods were identified and evaluated against a criterion generated by averaging across observations conducted on 14 students in one seventh-grade classroom. Results suggested that the best approximation of mean student engagement was obtained by observing a different student during each consecutive 15-s interval, whereas observing an entire group of students during each interval underestimated the mean level of behavior within a phase and the degree of behavior change across phases. In contrast, when observations were restricted to the 3 students with the lowest levels of engagement, data revealed greater variability in engagement across baseline sessions and suggested a more notable change in student behavior subsequent to intervention implementation.


Subjects
Behavior Observation Techniques, Mathematics/education, Students, Adolescent, Child, Humans, Observer Variation, Schools, Video Recording
13.
Sch Psychol Q; 29(4): 438-451, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25485466

ABSTRACT

This study examines the factor structure, reliability, and validity of a novel school-based screening instrument for academic and disruptive behavior problems commonly experienced by children and adolescents with attention deficit hyperactivity disorder (ADHD). Participants included 39 classroom teachers from two public school districts in the northeastern United States. Teacher ratings were obtained for 390 students in grades K-6. Exploratory factor analysis supports a two-factor structure (oppositional/disruptive and academic productivity/disorganization). Data from the screening instrument demonstrate favorable internal consistency, temporal stability, and convergent validity. The novel measure should facilitate classroom intervention for problem behaviors associated with ADHD by identifying at-risk students and determining specific targets for daily behavior report card interventions.
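
A minimal, hypothetical sketch of the two analyses named above, run on simulated teacher ratings (the items, loadings, and sample are invented and do not reproduce the study's data or procedures): an exploratory factor analysis with varimax rotation, followed by Cronbach's alpha for one resulting subscale. The rotation argument assumes scikit-learn 0.24 or later.

```python
# Simulate 390 ratings on 8 items driven by two latent traits, then factor-analyze.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n = 390
opp, acad = rng.normal(size=(2, n))              # two latent traits
loadings = np.array([0.8, 0.7, 0.75, 0.65])
items = np.hstack([opp[:, None] * loadings,      # items 1-4: "oppositional/disruptive"
                   acad[:, None] * loadings])    # items 5-8: "academic productivity"
items += rng.normal(scale=0.5, size=items.shape)

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
print("Rotated loadings (items x factors):\n", np.round(fa.components_.T, 2))

def cronbach_alpha(x):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    x = np.asarray(x, float)
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

print("Alpha, items 1-4:", round(cronbach_alpha(items[:, :4]), 2))
```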


Subjects
Adolescent Behavior/psychology, Attention Deficit Disorder with Hyperactivity/diagnosis, Child Behavior/psychology, Mass Screening/standards, Adolescent, Child, Child, Preschool, Evidence-Based Medicine, Factor Analysis, Statistical, Female, Humans, Male, New England, Reproducibility of Results, Schools, Students/psychology
14.
J Sch Psychol; 52(1): 13-35, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24495492

ABSTRACT

Generalizability Theory (GT) offers increased utility for assessment research given the ability to concurrently examine multiple sources of variance, inform both relative and absolute decision making, and determine both the consistency and generalizability of results. Despite these strengths, assessment researchers within the fields of education and psychology have been slow to adopt and utilize a GT approach. This underutilization may be due to an incomplete understanding of the conceptual underpinnings of GT, the actual steps involved in designing and implementing generalizability studies, or some combination of both issues. The goal of the current article is therefore two-fold: (a) to provide readers with the conceptual background and terminology related to the use of GT and (b) to facilitate understanding of the range of issues that need to be considered in the design, implementation, and interpretation of generalizability and dependability studies. Given the relevance of this analytic approach to applied assessment contexts, there exists a need to ensure that GT is both accessible to, and understood by, researchers in education and psychology. Important methodological and analytical considerations are presented and implications for applied use are described.
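
To ground the terminology, here is a minimal, hypothetical one-facet (persons x occasions) G-study and D-study sketch on simulated ratings. The design, variance components, and numbers of occasions are invented for illustration and are not drawn from the article.

```python
# Simulate 20 students rated on 5 occasions, estimate variance components from
# the ANOVA mean squares, then project G (relative) and Phi (absolute) coefficients.
import numpy as np

rng = np.random.default_rng(3)
n_p, n_o = 20, 5
person = rng.normal(0.0, 1.0, (n_p, 1))           # stable between-student differences
occasion = rng.normal(0.0, 0.3, (1, n_o))         # day effects shared by all students
scores = 3 + person + occasion + rng.normal(0.0, 0.8, (n_p, n_o))

# G study: expected-mean-square estimates of the variance components
grand = scores.mean()
ss_p = n_o * ((scores.mean(axis=1) - grand) ** 2).sum()
ss_o = n_p * ((scores.mean(axis=0) - grand) ** 2).sum()
ss_res = ((scores - grand) ** 2).sum() - ss_p - ss_o
ms_p, ms_o = ss_p / (n_p - 1), ss_o / (n_o - 1)
ms_res = ss_res / ((n_p - 1) * (n_o - 1))

var_res = ms_res                                  # person-by-occasion interaction + error
var_p = max((ms_p - ms_res) / n_o, 0.0)           # universe-score (person) variance
var_o = max((ms_o - ms_res) / n_p, 0.0)           # occasion variance

# D study: project coefficients for scores averaged over k occasions
for k in (1, 3, 5, 10):
    g_rel = var_p / (var_p + var_res / k)             # generalizability (relative decisions)
    phi = var_p / (var_p + (var_o + var_res) / k)     # dependability (absolute decisions)
    print(f"{k:2d} occasions: G = {g_rel:.2f}, Phi = {phi:.2f}")
```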


Subjects
Psychology, Educational, Research Design, Humans, Psychometrics, Reproducibility of Results, Schools
15.
Sch Psychol Q; 29(2): 171-181, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24274156

ABSTRACT

Although generalizability theory has been used increasingly in recent years to investigate the dependability of behavioral estimates, many of these studies have relied on use of general education populations as opposed to those students who are most likely to be referred for assessment due to problematic classroom behavior (e.g., inattention, disruption). The current study investigated the degree to which differences exist in terms of the magnitude of both variance component estimates and dependability coefficients between students nominated by their teachers for Tier 2 interventions due to classroom behavior problems and a general classroom sample (i.e., including both nominated and non-nominated students). The academic engagement levels of 16 (8 nominated, 8 non-nominated) middle school students were measured by 4 trained observers using momentary time-sampling procedures. A series of G and D studies were then conducted to determine whether the 2 groups were similar in terms of the (a) distribution of rating variance and (b) number of observations needed to achieve an adequate level of dependability. Results suggested that the behavior of students in the teacher-nominated group fluctuated more across time and that roughly twice as many observations would therefore be required to yield similar levels of dependability compared with the combined group. These findings highlight the importance of constructing samples of students that are comparable to those students with whom the measurement method is likely to be applied when conducting psychometric investigations of behavioral assessment tools.


Subjects
Achievement, Child Behavior/psychology, Schools, Students/psychology, Child, Female, Humans, Male, Psychometrics, Reproducibility of Results
16.
J Sch Psychol; 51(1): 81-96, 2013 Feb.
Article in English | MEDLINE | ID: mdl-23375174

ABSTRACT

Although treatment acceptability was originally proposed as a critical factor in determining the likelihood that a treatment will be used with integrity, more contemporary findings suggest that whether something is likely to be adopted into routine practice is dependent on the complex interplay among a number of different factors. The Usage Rating Profile-Intervention (URP-I; Chafouleas, Briesch, Riley-Tillman, & McCoach, 2009) was recently developed to assess these additional factors, conceptualized as potentially contributing to the quality of intervention use and maintenance over time. The purpose of the current study was to improve upon the URP-I by expanding and strengthening each of the original four subscales. Participants included 1005 elementary teachers who completed the instrument in response to a vignette depicting a common behavior intervention. Results of exploratory and confirmatory factor analyses, as well as reliability analyses, supported a measure containing 29 items and yielding 6 subscales: Acceptability, Understanding, Feasibility, Family-School Collaboration, System Climate, and System Support. Collectively, these items provide information about potential facilitators and barriers to usage that exist at the level of the individual, intervention, and environment. Information gleaned from the instrument is therefore likely to aid consultants in both the planning and evaluation of intervention efforts.


Subjects
Faculty, Schools, Female, Humans, Male, Reproducibility of Results, Students
17.
Sch Psychol Q; 27(4): 187-197, 2012 Dec.
Article in English | MEDLINE | ID: mdl-23294233

ABSTRACT

Although direct observation is one of the most frequently used assessment methods by school psychologists, studies have shown that the number of observations needed to obtain a dependable estimate of student behavior may be impractical. Because direct observation may be used to inform important decisions about students, it is crucial that data be reliable. Preliminary research has suggested that dependability may be improved by extending the length of individual observations. The purpose of the current study was, therefore, to examine how changes in observation duration affect the dependability of student engagement data. Twenty seventh-grade students were each observed for 30 min across 2 days during math instruction. Generalizability theory was then used to calculate reliability-like coefficients for the purposes of intraindividual decision making. Across days, acceptable levels of dependability for progress monitoring (i.e., .70) were achieved through two 30-min observations, three 15-min observations, or four to five 10-min observations. Acceptable levels of dependability for higher-stakes decisions (i.e., .80) required over an hour of cumulative observation time. Within a given day, a 15-min observation was found to be adequate for making low-stakes decisions, whereas an hour-long observation was necessary for high-stakes decision making. Limitations of the current study and implications for research and practice are discussed.


Subjects
Adolescent Behavior/psychology, Psychology, Educational/methods, Research Design, Students/psychology, Students/statistics & numerical data, Adolescent, Female, Humans, Male, New England, Observer Variation, Psychometrics, Reproducibility of Results, Time Factors, Urban Population
18.
J Sch Psychol; 49(1): 131-155, 2011 Feb.
Article in English | MEDLINE | ID: mdl-21215839

ABSTRACT

Although the efficiency with which a wide range of behavioral data can be obtained makes behavior rating scales particularly attractive tools for the purposes of screening and evaluation, feasibility concerns arise in the context of formative assessment. Specifically, informant load, or the amount of time informants are asked to contribute to the assessment process, likely has a negative impact on the quality of data over time and the informant's willingness to participate. Two important determinants of informant load in progress monitoring are the length of the rating scale (i.e., the number of items) and how frequently informants are asked to provide ratings (i.e., the number of occasions). The purpose of the current study was to investigate the dependability of the IOWA Conners Teacher Rating Scale (Loney & Milich, 1982), which is used to differentiate inattentive-overactive from oppositional-defiant behaviors. Specifically, the facets of items and occasions were examined to identify combinations of these sources of error necessary to reach an acceptable level of dependability for both absolute and relative decisions. Results from D studies elucidated a variety of possible item-occasion combinations reaching the criteria for adequate dependability. Recommendations for research and practice are discussed.


Subjects
Attention Deficit and Disruptive Behavior Disorders/diagnosis, Child Behavior Disorders/diagnosis, Psychiatric Status Rating Scales, Adolescent, Attention Deficit Disorder with Hyperactivity/diagnosis, Attention Deficit Disorder with Hyperactivity/drug therapy, Attention Deficit Disorder with Hyperactivity/psychology, Attention Deficit and Disruptive Behavior Disorders/drug therapy, Attention Deficit and Disruptive Behavior Disorders/psychology, Central Nervous System Stimulants/therapeutic use, Child, Child Behavior Disorders/drug therapy, Child Behavior Disorders/psychology, Female, Humans, Male, Methylphenidate/therapeutic use, Psychiatric Status Rating Scales/standards, Reproducibility of Results
19.
J Sch Psychol; 48(3): 219-246, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20380948

ABSTRACT

A total of 4 raters, including 2 teachers and 2 research assistants, used Direct Behavior Rating Single Item Scales (DBR-SIS) to measure the academic engagement and disruptive behavior of 7 middle school students across multiple occasions. Generalizability study results for the full model revealed modest to large magnitudes of variance associated with persons (students), occasions of measurement (day), and associated interactions. However, an unexpectedly low proportion of the variance in DBR data was attributable to the facet of rater, and the variance component for the facet of rating occasion nested within day (10-min interval within a class period) was negligible. Results of a reduced model and subsequent decision studies specific to individual rater and rater type (research assistant and teacher) suggested that reliability-like estimates differed substantially depending on the rater. Overall, findings supported previous recommendations that, in the absence of estimates of rater reliability and firm recommendations regarding rater training, ratings obtained from DBR-SIS, and subsequent analyses, be conducted within rater. Additionally, results suggested that when selecting a teacher rater, the person most likely to substantially interact with target students during the specified observation period may be the best choice.


Subjects
Child Behavior Disorders, Child Behavior, Educational Measurement/methods, Students, Analysis of Variance, Child, Female, Humans, Male, Models, Statistical, Program Evaluation, Schools